The ability to jointly learn from multiple modalities, such as text, audio, and visual data, is a defining feature of intelligent systems. While there have been promising advances in designing neural networks to harness multimodal data, the enormous success of data augmentation currently remains limited to single-modality tasks like image classification. Indeed, it is particularly difficult to augment each modality while preserving the overall semantic structure of the data; for example, a caption may no longer be a good description of an image after standard augmentations, such as translation, have been applied. Moreover, it is challenging to specify reasonable transformations that are not tailored to a particular modality. In this paper, we introduce LeMDA, Learning Multimodal Data Augmentation, an easy-to-use method that automatically learns to jointly augment multimodal data in feature space, with no constraints on the identities of the modalities or the relationships between them. We show that LeMDA can (1) profoundly improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprising image, text, and tabular data.
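To make the feature-space idea concrete, below is a minimal PyTorch sketch of an augmentation network that perturbs fused multimodal features; the module design, dimensions, and residual form are illustrative assumptions, not the authors' exact LeMDA architecture or training objective.

```python
import torch
import torch.nn as nn

class FeatureAugmenter(nn.Module):
    """Perturbs fused multimodal features in feature space (illustrative)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # residual perturbation keeps augmented features close to the originals
        return z + self.net(z)

text_z, image_z = torch.randn(8, 256), torch.randn(8, 256)  # dummy encoder outputs
augmenter = FeatureAugmenter(512)
fused = torch.cat([text_z, image_z], dim=-1)  # modality-agnostic: any encoders work
fused_aug = augmenter(fused)                  # train the task head on both versions
```

In a full setup, the augmenter would be trained with its own optimizer, in LeMDA's spirit of learning augmentations jointly with the task network rather than hand-designing them per modality.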
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, or about the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
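As a small illustration of two practices tallied above (k-fold cross-validation on the training set and ensembling over the resulting models), the sketch below uses scikit-learn with placeholder data and a placeholder classifier; it is not tied to any particular challenge solution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# placeholder data: 120 training samples, 30 test samples, 16 features
X, y = np.random.rand(120, 16), np.random.randint(0, 2, 120)
X_test = np.random.rand(30, 16)

fold_preds = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_preds.append(model.predict_proba(X_test))

# "ensembling based on multiple identical models": average the k folds' predictions
ensemble_probs = np.mean(fold_preds, axis=0)
```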
Recently, dominant DETR-based approaches apply a central-concept spatial prior to accelerate Transformer detector convergence. These methods gradually refine the reference points toward the centers of target objects and imbue object queries with the updated central reference information for spatially conditional attention. However, centralizing reference points may severely deteriorate queries' saliency and confuse detectors due to the indiscriminative spatial prior. To bridge the gap between the reference points of salient queries and Transformer detectors, we propose SAlient Point-based DETR (SAP-DETR), which treats object detection as a transformation from salient points to instance objects. In SAP-DETR, we explicitly initialize a query-specific reference point for each object query, gradually aggregate the points into an instance object, and then predict the distance from each side of the bounding box to these points. By rapidly attending to the query-specific reference region and other conditional extreme regions in the image features, SAP-DETR effectively bridges the gap between salient points and the query-based Transformer detector while converging significantly faster. Our extensive experiments demonstrate that SAP-DETR converges about 1.4 times faster with competitive performance. Under the standard training scheme, SAP-DETR consistently improves on SOTA approaches by 1.0 AP. Based on ResNet-DC-101, SAP-DETR achieves 46.9 AP.
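The side-distance box parameterization described above can be illustrated with a short decoding sketch; the function name, shapes, and normalized coordinates are assumptions for illustration, not SAP-DETR's actual code.

```python
import torch

def decode_boxes(ref_points: torch.Tensor, dists: torch.Tensor) -> torch.Tensor:
    """ref_points: (N, 2) normalized (x, y); dists: (N, 4) as (left, top, right, bottom)."""
    x, y = ref_points.unbind(-1)
    l, t, r, b = dists.unbind(-1)
    # unlike center-based decoding (where l == r and t == b), the salient
    # reference point may sit anywhere inside the resulting box
    return torch.stack([x - l, y - t, x + r, y + b], dim=-1)

boxes = decode_boxes(torch.rand(3, 2), torch.rand(3, 4) * 0.2)  # (x1, y1, x2, y2)
```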
Perceiving the three-dimensional (3D) structure of a spacecraft is a prerequisite for successfully executing many on-orbit space missions and can provide critical inputs for many downstream vision algorithms. In this paper, we propose to sense the 3D structure of spacecraft using a light detection and ranging (LiDAR) sensor and a monocular camera. To this end, a spacecraft depth completion network (SDCNet) is proposed to recover a dense depth map from a gray-scale image and a sparse depth map. Specifically, SDCNet decomposes the object-level spacecraft depth completion task into a foreground segmentation subtask and a foreground depth completion subtask: it first segments the spacecraft region and then performs depth completion on the segmented foreground, so that background interference with foreground depth completion is effectively avoided. In addition, an attention-based feature fusion module is proposed to aggregate complementary information across the different inputs, sequentially inferring correlations between features along the channel and spatial dimensions. Furthermore, four metrics are proposed to evaluate object-level depth completion performance, reflecting the quality of spacecraft depth completion results more intuitively. Finally, a large-scale satellite depth completion dataset is constructed for training and testing spacecraft depth completion algorithms. Empirical experiments on this dataset demonstrate the effectiveness of the proposed SDCNet, which achieves a mean absolute error of 0.25 m and a mean absolute truncated error of 0.759 m, surpassing previous methods by a large margin. Spacecraft pose estimation experiments were also conducted on the depth completion results, showing that the predicted dense depth maps can satisfy the needs of downstream vision tasks.
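The two-subtask decomposition can be sketched in a few lines of PyTorch: a segmentation branch predicts the foreground spacecraft mask, and depth completion is gated by that mask so the background cannot interfere. The tiny convolutional heads below are placeholders, not the actual SDCNet backbone.

```python
import torch
import torch.nn as nn

class TwoStageDepthCompletion(nn.Module):
    def __init__(self):
        super().__init__()
        self.seg_head = nn.Conv2d(2, 1, 3, padding=1)    # gray + sparse depth -> mask logits
        self.depth_head = nn.Conv2d(3, 1, 3, padding=1)  # gray + sparse depth + mask -> dense depth

    def forward(self, gray, sparse_depth):
        x = torch.cat([gray, sparse_depth], dim=1)
        mask = torch.sigmoid(self.seg_head(x))           # foreground segmentation subtask
        dense = self.depth_head(torch.cat([x, mask], dim=1))
        return dense * mask, mask                        # complete depth only on the foreground

gray, sparse = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
dense_depth, fg_mask = TwoStageDepthCompletion()(gray, sparse)
```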
3D scene photorealistic stylization aims to generate photorealistic images of a scene from arbitrary novel views according to a given style image, while ensuring consistency when rendering from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized scenes by combining the features of the style image with multi-view images during training. However, these methods generate novel-view images containing objectionable artifacts. Moreover, they cannot achieve universal photorealistic stylization of a 3D scene: the radiance-field-based scene representation network must be retrained for every new style image. We propose a novel 3D scene photorealistic style transfer framework to address these problems. It achieves photorealistic 3D scene style transfer from a single 2D style image. We first pre-train a 2D photorealistic style transfer network, which can perform photorealistic style transfer between any given content image and style image. Then, we use voxel features to optimize the 3D scene and obtain its geometric representation. Finally, we jointly optimize a hypernetwork to achieve photorealistic style transfer of the scene for arbitrary style images. In the transfer stage, the pre-trained 2D photorealistic network constrains the photorealistic style across different views and different style images of the 3D scene. Experimental results show that our method not only achieves 3D photorealistic style transfer for arbitrary style images but also outperforms existing methods in visual quality and consistency. Project page: https://semchan.github.io/upst_nerf.
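The transfer-stage constraint can be sketched as follows: renders from the 3D branch are pushed toward the outputs of a frozen, pre-trained 2D photorealistic stylizer, and because every view is matched against the same stylizer, cross-view style consistency is encouraged. Both networks below are placeholders, not the framework's actual architecture.

```python
import torch
import torch.nn.functional as F

stylizer_2d = torch.nn.Conv2d(3, 3, 1)          # stands in for the pre-trained 2D network
for p in stylizer_2d.parameters():
    p.requires_grad_(False)                     # frozen during the transfer stage

rendered_views = torch.rand(4, 3, 64, 64, requires_grad=True)  # stands in for NeRF renders
with torch.no_grad():
    targets = stylizer_2d(rendered_views)       # per-view photorealistic style targets
loss = F.mse_loss(rendered_views, targets)      # same stylizer across views -> consistent style
loss.backward()
```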
Occluded person re-identification (Re-ID) aims to address the occlusion problem when matching persons of interest across multiple cameras. With the advance of deep learning technology and the increasing demand for intelligent video surveillance, the frequent occlusions encountered in real-world applications have made occluded person Re-ID draw great interest from researchers. A large number of occluded person Re-ID methods have been proposed, yet few surveys are dedicated to occlusion. To fill this gap and help advance future research, this paper provides a systematic survey of occluded person Re-ID. Through an in-depth analysis of human occlusion, we find that most existing methods consider only part of the occlusion problem. We therefore review occlusion-related person Re-ID methods from the perspectives of both problems and solutions. We summarize four issues caused by occlusion in person Re-ID: position misalignment, scale misalignment, noisy information, and missing information. Occlusion-related methods addressing these different issues are then categorized and introduced. After that, we summarize and compare the performance of recent occluded person Re-ID methods on four popular datasets: Partial-ReID, Partial-iLIDS, Occluded-ReID, and Occluded-DukeMTMC. Finally, we provide insights into promising future research directions.
Deep learning methodology has contributed greatly to the development of the hyperspectral image (HSI) analysis community. However, it also makes HSI analysis systems vulnerable to adversarial attacks. To this end, we propose a masked spatial-spectral autoencoder (MSSA), grounded in self-supervised learning theory, to enhance the robustness of HSI analysis systems. First, a masked sequence attention learning module is devised to promote the inherent robustness of HSI analysis systems along the spectral channel. Then, we develop a graph convolutional network with a learnable graph structure to establish global pixel-wise combinations; in this way, all related pixels in each combination can disperse the effect of an attack, yielding better defense performance in the spatial domain. Finally, to improve defense capability and address the problem of limited labeled samples, MSSA employs spectral reconstruction as a pretext task and fits the dataset in a self-supervised manner. Comparisons with state-of-the-art hyperspectral classification methods and representative adversarial defense strategies validate the effectiveness of MSSA.
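The spectral-reconstruction pretext task can be sketched compactly: random spectral bands of each pixel are masked, and an autoencoder is trained to reconstruct the masked bands from the visible ones. The shapes and the tiny MLP below are illustrative, not MSSA's actual components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bands = 200
pixels = torch.rand(512, bands)                   # HSI pixels as spectral vectors
mask = (torch.rand(512, bands) > 0.5).float()     # randomly mask half of the bands
autoencoder = nn.Sequential(nn.Linear(bands, 64), nn.ReLU(), nn.Linear(64, bands))

recon = autoencoder(pixels * mask)                # reconstruct from visible bands only
loss = F.mse_loss(recon * (1 - mask), pixels * (1 - mask))  # score only masked bands
loss.backward()
```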
To efficiently accomplish tasks in multi-robot systems, a problem that must be addressed is simultaneous localization and mapping (SLAM). LiDAR (light detection and ranging) is used in many SLAM solutions due to its superior accuracy, but its performance degrades in featureless environments such as tunnels or long corridors. Centralized SLAM solves the problem with a cloud server, which requires substantial computational resources and lacks robustness against central-node failure. To address these issues, we propose a distributed SLAM solution that estimates the trajectories of a group of robots using ultra-wideband (UWB) ranging and odometry measurements. The proposed approach distributes processing among the robot team and significantly mitigates the computational concerns arising from centralized SLAM. Our solution determines the relative pose (also known as a loop closure) between two robots by minimizing the error of UWB range measurements taken at different positions while the robots are in close proximity. UWB provides a good distance measure under line-of-sight conditions, but retrieving a precise pose estimate remains a challenge due to noise and the robots' unpredictable paths. To deal with questionable loop closures, we use pairwise consistency maximization (PCM) to examine loop-closure quality and perform outlier rejection. The filtered loop closures are then fused with odometry in a distributed pose graph optimization (DPGO) module to recover the full trajectories of the robot team. Extensive experiments were conducted to validate the effectiveness of the proposed method.
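The outlier-rejection step can be illustrated in the spirit of PCM: two loop closures are mutually consistent if composing closure i, the odometry between the two measurement poses, and the inverse of closure j leaves a small residual. PCM proper keeps a maximal clique of the consistency graph; the greedy selection below is a simplified stand-in, and all poses here are planar translations only.

```python
import numpy as np

closures = np.array([[1.0, 0.1], [1.1, 0.0], [5.0, 4.0]])  # relative translations a->b
odom_a   = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0]])  # robot a's pose at each closure
odom_b   = np.array([[1.0, 0.1], [1.3, 0.1], [1.5, 0.1]])  # robot b's pose at each closure

n = len(closures)
consistent = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        # predicted vs. measured: go through closure i and both odometries, compare to closure j
        residual = closures[i] + (odom_b[j] - odom_b[i]) - (odom_a[j] - odom_a[i]) - closures[j]
        consistent[i, j] = np.linalg.norm(residual) < 0.5   # chi-square-like gate
inliers = [i for i in range(n) if consistent[i].sum() >= n / 2]  # greedy clique stand-in
# here closures 0 and 1 survive; the gross outlier [5.0, 4.0] is rejected
```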
Model quantization methods are often used to deploy deep models in a computationally efficient manner. Moreover, as emerging hardware supports mixed-bit arithmetic operations, recent research on mixed-precision quantization (MPQ) has begun to fully exploit representational capacity by searching for optimized bitwidths for the different layers and modules of a network. However, previous studies mainly search for MPQ strategies via costly schemes such as reinforcement learning or neural architecture search, or simply exploit partial prior knowledge for bitwidth assignment, which may be biased and suboptimal. In this work, we present a novel stochastic differentiable quantization (SDQ) method that can automatically learn an MPQ strategy in a more flexible and globally optimized space with a smoother gradient approximation. In particular, differentiable bitwidth parameters (DBPs) are employed as probability factors in stochastic quantization between adjacent bitwidth choices. After obtaining the optimal MPQ strategy, we further train the network with entropy-aware bin regularization and knowledge distillation. We extensively evaluate our method on multiple networks, different hardware (GPUs and FPGAs), and datasets. SDQ outperforms all state-of-the-art mixed- or single-precision quantization methods with a lower bitwidth, and even performs better than the full-precision counterparts across various ResNet and MobileNet families, demonstrating the effectiveness and superiority of our method.
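The core idea can be sketched as follows: a differentiable bitwidth parameter acts as the probability of quantizing a weight tensor to the higher of two adjacent bitwidths. The uniform quantizer and the soft-mixture relaxation below (a differentiable surrogate for the stochastic choice) are illustrative assumptions, not SDQ's exact scheme.

```python
import torch

def uniform_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    # symmetric uniform quantizer; in practice round() uses a
    # straight-through estimator in the backward pass, omitted here
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(w / scale) * scale

w = torch.randn(64, 64)
theta = torch.tensor(0.0, requires_grad=True)    # differentiable bitwidth parameter (DBP)
p = torch.sigmoid(theta)                          # probability of the higher bitwidth
q_low, q_high = uniform_quantize(w, 4), uniform_quantize(w, 8)
w_q = p * q_high + (1 - p) * q_low                # relaxed mixture keeps theta trainable
loss = w_q.pow(2).mean()                          # stand-in task loss
loss.backward()                                   # gradients flow to theta via p
```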
This paper explores the feasibility of finding optimal sub-models within a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework that can search for such sub-structures end-to-end from the original model across multiple dimensions, including input tokens and the MHSA and MLP modules, with state-of-the-art performance. Our method is based on a learnable, unified L1 sparsity constraint with pre-defined factors that reflect global importance in a continuous search space across different dimensions. The search process is highly efficient thanks to a single-shot training scheme: on DeiT-S, for example, ViT-Slim requires only ~43 GPU hours for the search, and the searched structure has flexible multi-dimensional sizes in different modules. A budget threshold is then applied according to the accuracy-FLOPs trade-off required by the target device, and a retraining process is performed to obtain the final model. Extensive experiments show that ViT-Slim can compress up to 40% of the parameters and 40% of the FLOPs of vision transformers while improving accuracy by ~0.6% on ImageNet. We also demonstrate the advantage of our searched models on several downstream datasets. Our source code will be publicly available.
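The search mechanism can be sketched with a single dimension: a learnable soft mask gates one searchable axis (here, MLP hidden units), an L1 penalty pushes unimportant entries toward zero, and a budget threshold prunes them after training. The mask placement and the tiny MLP are illustrative, not the full multi-dimensional ViT-Slim search.

```python
import torch
import torch.nn as nn

hidden = 384
mlp_in, mlp_out = nn.Linear(192, hidden), nn.Linear(hidden, 192)
mask = nn.Parameter(torch.ones(hidden))           # soft saliency over hidden units

x = torch.randn(16, 192)
y = mlp_out(torch.relu(mlp_in(x)) * mask)         # mask gates the searchable dimension
loss = y.pow(2).mean() + 1e-4 * mask.abs().sum()  # stand-in task loss + L1 sparsity on the mask
loss.backward()

# after single-shot training: keep units whose saliency clears the budget threshold
keep = mask.detach().abs() > 0.05
```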